
    Jacobi Fiber Surfaces for Bivariate Reeb Space Computation

    This paper presents an efficient algorithm for computing the Reeb space of an input bivariate piecewise linear scalar function f defined on a tetrahedral mesh. By extending and generalizing algorithmic concepts from the univariate case to the bivariate one, we report the first practical, output-sensitive algorithm for the exact computation of such a Reeb space. The algorithm starts by identifying the Jacobi set of f, the bivariate analog of critical points in the univariate case. Next, the Reeb space is computed by segmenting the input mesh along the new notion of Jacobi Fiber Surfaces, the bivariate analog of critical contours in the univariate case. We additionally present a simplification heuristic that enables the progressive coarsening of the Reeb space. Our algorithm is simple to implement and most of its computations can be trivially parallelized. We report performance numbers demonstrating orders-of-magnitude speedups over previous approaches, enabling for the first time the tractable computation of bivariate Reeb spaces in practice. Moreover, unlike range-based quantization approaches (such as the Joint Contour Net), our algorithm is parameter-free. We demonstrate the utility of our approach by using the Reeb space as a semi-automatic segmentation tool for bivariate data. In particular, we introduce continuous scatterplot peeling, a technique which reduces clutter in the continuous scatterplot by interactively selecting the features of the Reeb space to project. We provide a VTK-based C++ implementation of our algorithm that can be used for reproduction purposes or for the development of new Reeb space based visualization techniques.
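    For orientation on the fiber machinery this abstract relies on: a fiber is the preimage of a single range point, and for a piecewise linear bivariate field the image of a tetrahedron is the convex hull of its four vertex values in range space, so a fiber passes through a tet exactly when the queried range point lies in that hull. Below is a minimal self-contained sketch of that per-cell test (illustrative names and structure only, not the paper's VTK implementation):

```cpp
// Minimal sketch (illustrative, not the paper's code): the fiber of a
// bivariate PL field for range point q intersects a tetrahedron iff q
// lies in the convex hull of the four vertex images in range space.
// For 4 points, that hull is covered by the 4 triangles of vertex triples.
#include <array>
#include <iostream>

struct Vec2 { double u, v; };

// Twice the signed area of triangle (a, b, c); sign gives orientation.
double cross(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.u - a.u) * (c.v - a.v) - (b.v - a.v) * (c.u - a.u);
}

// Point-in-triangle via consistent orientation (boundary counts as inside).
bool inTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c) {
    double d1 = cross(p, a, b), d2 = cross(p, b, c), d3 = cross(p, c, a);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}

// True iff the fiber of q passes through the tet whose vertices map to
// rangePts under the bivariate field.
bool fiberIntersectsTet(const std::array<Vec2, 4>& rangePts, Vec2 q) {
    static const int tri[4][3] = {{0,1,2}, {0,1,3}, {0,2,3}, {1,2,3}};
    for (auto& t : tri)
        if (inTriangle(q, rangePts[t[0]], rangePts[t[1]], rangePts[t[2]]))
            return true;
    return false;
}

int main() {
    std::array<Vec2, 4> pts = {{{0,0}, {1,0}, {0,1}, {1,1}}};
    std::cout << fiberIntersectsTet(pts, {0.5, 0.5}) << "\n";  // 1: fiber crosses
    std::cout << fiberIntersectsTet(pts, {2.0, 2.0}) << "\n";  // 0: outside range
}
```

    Fiber surfaces are then preimages of curves (control polygons) in range space, assembled by applying such per-cell tests across the mesh.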

    Older Adults’ Perceptions of Intergenerational Support After Widowhood: How Do Men and Women Differ?

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/142784/1/Nesse-Ha-InternSupport-FamIssues-2006.pd

    Direct Multifield Volume Ray Casting of Fiber Surfaces

    Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, the analog of isosurfaces for bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space, and perform geometric tests in range space, where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocessing, and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including an analysis of multifield combustion data.
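    A minimal sketch of the range-space test described above, assuming a simple closed control polygon; the names and the even-odd sign convention are our assumptions, not the paper's code:

```cpp
// Illustrative sketch: signed distance in range space from a sample's
// bivariate value to a closed control polygon. Negative inside the
// polygon, positive outside; a sign change between consecutive ray
// samples marks a fiber-surface intersection.
#include <cmath>
#include <iostream>
#include <vector>

struct Vec2 { double x, y; };

// Distance from p to segment ab (guards against degenerate segments).
double segDist(Vec2 p, Vec2 a, Vec2 b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0.0;
    t = std::fmax(0.0, std::fmin(1.0, t));
    double ex = a.x + t * dx - p.x, ey = a.y + t * dy - p.y;
    return std::sqrt(ex * ex + ey * ey);
}

// Signed distance via nearest edge + even-odd containment test.
double signedDistance(Vec2 p, const std::vector<Vec2>& poly) {
    double d = 1e300;
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        d = std::fmin(d, segDist(p, poly[i], poly[j]));
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside ? -d : d;
}

int main() {
    std::vector<Vec2> poly = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};  // toy control polygon
    std::cout << signedDistance({0.5, 0.5}, poly) << "\n";  // -0.5 (inside)
    std::cout << signedDistance({2.0, 0.5}, poly) << "\n";  //  1.0 (outside)
}
```

    In the method the abstract describes, each ray sample's bivariate value would be evaluated against such a field; a zero crossing of the signed distance between consecutive samples locates a pixel-exact fiber-surface hit.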

    A selective newsvendor approach to order management

    Consider a supplier offering a product to several potential demand sources, each with a unique revenue, size, and probability that it will materialize. Given a long procurement lead time, the supplier must choose the orders to pursue and the total quantity to procure prior to the selling season. We model this as a selective newsvendor problem of maximizing profits where the total (random) demand is given by the set of pursued orders. Given that the dimensionality of a mixed-integer linear programming formulation of the problem increases exponentially with the number of potential orders, we develop both a tailored exact algorithm based on the L-shaped method for two-stage stochastic programming and a heuristic method. We also extend our solution approach to account for piecewise-linear cost and revenue functions as well as a multiperiod setting. Extensive experimentation indicates that our exact approach rapidly finds optimal solutions for problems with three times as many orders as a state-of-the-art commercial solver can handle. In addition, our heuristic approach provides average gaps of less than 1% for the largest problems that can be solved exactly. Observing that the gaps decrease as problem size grows, we expect the heuristic approach to work well for large problem instances. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/61330/1/20320_ftp.pd
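    To make the model concrete, here is a brute-force sketch under simplifying assumptions that are ours, not the paper's (one unit price and unit cost rather than per-order revenues, independent orders, a normal approximation of total demand): order i materializes with probability q[i] and then demands d[i] units, so a pursued set has mean Σ q·d and variance Σ q(1−q)d², and the stocking quantity follows the newsvendor critical fractile.

```cpp
// Selective-newsvendor sketch (illustrative; the paper's MILP / L-shaped
// formulation is richer). Small instances are solved by enumerating subsets.
#include <cmath>
#include <iostream>
#include <vector>

double phi(double z) { return 0.3989422804014327 * std::exp(-0.5 * z * z); }
double Phi(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

// Expected profit when stocking the critical-fractile quantity Q*.
double expectedProfit(double mu, double sigma, double p, double c) {
    if (sigma <= 0.0) return (p - c) * mu;       // deterministic demand
    double beta = (p - c) / p;                   // critical fractile
    double lo = -8, hi = 8;                      // invert Phi by bisection
    for (int it = 0; it < 100; ++it) {
        double mid = 0.5 * (lo + hi);
        if (Phi(mid) < beta) lo = mid; else hi = mid;
    }
    double z = 0.5 * (lo + hi), Q = mu + z * sigma;
    double loss = phi(z) - z * (1.0 - Phi(z));   // standard normal loss L(z)
    return p * (mu - sigma * loss) - c * Q;      // p*E[min(D,Q)] - c*Q
}

int main() {
    std::vector<double> d = {120, 80, 200, 50};    // order sizes
    std::vector<double> q = {0.9, 0.5, 0.3, 0.8};  // materialization probs
    double p = 10.0, c = 6.0;
    double best = 0.0; unsigned bestS = 0;
    for (unsigned S = 0; S < (1u << d.size()); ++S) {  // enumerate subsets
        double mu = 0, var = 0;
        for (size_t i = 0; i < d.size(); ++i)
            if (S >> i & 1) { mu += q[i] * d[i]; var += q[i] * (1 - q[i]) * d[i] * d[i]; }
        double prof = expectedProfit(mu, std::sqrt(var), p, c);
        if (prof > best) { best = prof; bestS = S; }
    }
    std::cout << "best subset mask " << bestS << ", expected profit " << best << "\n";
}
```

    The subset enumeration is exponential, which is exactly why the paper develops an L-shaped exact method and a heuristic for larger instances.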

    Robust diagnostic genetic testing using solution capture enrichment and a novel variant-filtering interface.

    Targeted hybridization enrichment prior to next-generation sequencing is a widespread method for characterizing sequence variation in a research setting, and is being adopted by diagnostic laboratories. However, the number of variants identified can overwhelm clinical laboratories with strict time constraints, the final interpretation of likely pathogenicity being a particular bottleneck. To address this, we have developed an approach in which, after automatic variant calling on a standard Unix pipeline, subsequent variant filtering is performed interactively, using AgileExomeFilter and AgilePindelFilter (http://dna.leeds.ac.uk/agile), tools designed for clinical scientists with standard desktop computers. To demonstrate the method's diagnostic efficacy, we tested 128 patients using (1) a targeted capture of 36 cancer-predisposing genes or (2) whole-exome capture for diagnosis of the genetically heterogeneous disorder primary ciliary dyskinesia (PCD). In the cancer cohort, complete concordance with previous diagnostic data was achieved across 793 variant genotypes. A high yield (42%) was also achieved for exome-based PCD diagnosis, underscoring the scalability of our method. Simple adjustments to the variant-filtering parameters further allowed the identification of a homozygous truncating mutation in a presumptive new PCD gene, DNAH8. These tools should allow diagnostic laboratories to expand their testing portfolios flexibly, using a standard set of reagents and techniques.
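    Purely to illustrate the kind of parameterized, interactive filtering workflow described (this is NOT AgileExomeFilter's logic; the input format, field names, and thresholds below are invented for the sketch):

```cpp
// Toy variant filter over a simplified tab-separated variant table
// (columns assumed: chrom pos gene genotype qual depth). Thresholds and
// the homozygous-only switch stand in for interactively tuned parameters.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

struct Filter {
    double minQual = 30.0;          // minimum variant quality
    int    minDepth = 10;           // minimum read depth
    bool   homozygousOnly = false;  // e.g. for recessive disorders like PCD
};

int main() {
    Filter f;
    f.homozygousOnly = true;        // adjusted interactively in a real tool
    std::ifstream in("variants.tsv");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string chrom, gene, genotype;
        long pos; double qual; int depth;
        if (!(ss >> chrom >> pos >> gene >> genotype >> qual >> depth)) continue;
        if (qual < f.minQual || depth < f.minDepth) continue;
        if (f.homozygousOnly && genotype != "1/1") continue;   // keep homozygotes
        std::cout << chrom << ":" << pos << "\t" << gene << "\t" << genotype << "\n";
    }
}
```

    In the paper, it was precisely this kind of parameter adjustment (restricting to homozygous truncating variants) that surfaced the candidate mutation in DNAH8.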

    Scalable Contour Tree Computation by Data Parallel Peak Pruning

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, both to aid human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems require analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. We report the first shared-memory (SMP) algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with more than 30× parallel speed-up on both CPU (using TBB) and GPU (using Thrust), and up to 70× speed-up compared to the serial sweep-and-merge algorithm.
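    For contrast with the parallel approach, here is a toy version of the serial sweep the abstract's speed-ups are measured against: a join tree is built by visiting vertices in descending scalar order and merging components with union-find (illustrative 1D field; the paper's data-parallel peak-pruning algorithm itself is not reproduced here).

```cpp
// Serial sweep-and-merge sketch: peaks appear as join-tree leaves, saddles
// where two previously separate components meet. Illustrative only.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

struct UnionFind {
    std::vector<int> parent;
    UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
};

int main() {
    // Toy 1D scalar field; edges connect neighbours i and i+1.
    std::vector<double> value = {1, 5, 2, 6, 3};
    int n = value.size();
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return value[a] > value[b]; });

    UnionFind uf(n);
    std::vector<bool> seen(n, false);
    for (int v : order) {                       // sweep from high to low
        seen[v] = true;
        std::vector<int> roots;                 // distinct neighbouring components
        for (int u : {v - 1, v + 1}) {
            if (u < 0 || u >= n || !seen[u]) continue;
            int r = uf.find(u);
            if (std::find(roots.begin(), roots.end(), r) == roots.end())
                roots.push_back(r);
        }
        if (roots.empty())
            std::cout << "peak (join-tree leaf) at vertex " << v << "\n";
        else if (roots.size() >= 2)
            std::cout << "saddle (join) at vertex " << v << "\n";
        for (int r : roots) uf.parent[r] = v;   // merge components into v's
    }
}
```

    The paper's contribution is to replace exactly this inherently sequential sweep with data-parallel peak pruning while preserving formal work and step bounds.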

    A neuron-specific cytoplasmic dynein isoform preferentially transports TrkB signaling endosomes

    Cytoplasmic dynein is the multisubunit motor protein for retrograde movement of diverse cargoes to microtubule minus ends. Here, we investigate the function of dynein variants, defined by different intermediate chain (IC) isoforms, by expressing fluorescent ICs in neuronal cells. Green fluorescent protein (GFP)–IC incorporates into functional dynein complexes that copurify with membranous organelles. In living PC12 cell neurites, GFP–dynein puncta travel in both the anterograde and retrograde directions. In cultured hippocampal neurons, neurotrophin receptor tyrosine kinase B (TrkB) signaling endosomes are transported by cytoplasmic dynein containing the neuron-specific IC-1B isoform and not by dynein containing the ubiquitous IC-2C isoform. Similarly, organelles containing TrkB isolated from brain by immunoaffinity purification also contain dynein with IC-1 but not IC-2 isoforms. These data demonstrate that the IC isoforms define dynein populations that are selectively recruited to transport distinct cargoes.

    Massive stars as thermonuclear reactors and their explosions following core collapse

    Nuclear reactions transform atomic nuclei inside stars. This is the process of stellar nucleosynthesis. The basic concepts of determining nuclear reaction rates inside stars are reviewed. How stars manage to burn their fuel so slowly most of the time is also considered. Stellar thermonuclear reactions involving protons in hydrostatic burning are discussed first. Then I discuss the triple-alpha reactions in the helium burning stage. Carbon and oxygen survive in red giant stars because of the nuclear structure of oxygen and neon. Further nuclear burning of carbon, neon, oxygen, and silicon in quiescent conditions is discussed next. In the subsequent core-collapse phase, neutronization due to electron capture from the top of the Fermi sea in a degenerate core takes place. The expected signal of neutrinos from a nearby supernova is calculated. The supernova often explodes inside a dense circumstellar medium, which is established due to the progenitor star losing its outermost envelope in a stellar wind or mass transfer in a binary system. The nature of the circumstellar medium and the ejecta of the supernova and their dynamics are revealed by observations in the optical, IR, radio, and X-ray bands, and I discuss some of these observations and their interpretations.
    Comment: To be published in "Principles and Perspectives in Cosmochemistry", Lecture Notes on Kodai School on Synthesis of Elements in Stars; ed. by Aruna Goswami & Eswar Reddy, Springer Verlag, 2009. Contains 21 figures
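    As a pointer to the rate formalism such reviews cover, the standard textbook expression for the non-resonant thermonuclear reaction rate per particle pair (general nuclear astrophysics, not a result specific to this article; here μ is the reduced mass, S(E) the astrophysical S-factor, and E_G the Gamow energy) is:

```latex
% Non-resonant thermonuclear reaction rate per particle pair (textbook form)
\langle \sigma v \rangle
  = \left( \frac{8}{\pi \mu} \right)^{1/2} \frac{1}{(kT)^{3/2}}
    \int_0^{\infty} S(E)\,
    \exp\!\left( -\frac{E}{kT} - \sqrt{\frac{E_G}{E}} \right) dE,
\qquad
E_G = 2 \mu c^2 \left( \pi \alpha Z_1 Z_2 \right)^2 .
```

    The competition between the Maxwell-Boltzmann factor e^{-E/kT} and the Coulomb tunnelling factor e^{-\sqrt{E_G/E}} confines the integrand to the narrow Gamow peak, which is the quantitative reason stars burn their fuel so slowly.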